We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimize the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
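To make the setting concrete, here is a minimal, hypothetical sketch of an adaptive random-scan Gibbs sampler of the kind described: the coordinate-selection probability is tuned on the fly from the chain's own output. The target (a bivariate normal), the adaptation rule (steering selection toward the coordinate with larger observed mean squared jumps), and the clipping constant `eps` are all illustrative assumptions, not the paper's construction; the clipping keeps selection probabilities bounded away from 0 and 1, one typical ingredient in conditions guaranteeing convergence.

```python
import numpy as np

def adaptive_random_scan_gibbs(rho=0.9, n_iter=5000, eps=0.1, seed=0):
    """Illustrative adaptive random-scan Gibbs sampler for a bivariate
    normal N(0, [[1, rho], [rho, 1]]) target (hypothetical example).
    The probability of selecting coordinate 0 is adapted during the run,
    but clipped to [eps, 1 - eps] so the adaptation stays bounded."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    alpha = 0.5                 # current P(update coordinate 0)
    msj = np.ones(2)            # running mean squared jump per coordinate
    samples = np.empty((n_iter, 2))
    for n in range(n_iter):
        i = 0 if rng.random() < alpha else 1
        j = 1 - i
        old = x[i]
        # exact full conditional: x_i | x_j ~ N(rho * x_j, 1 - rho^2)
        x[i] = rho * x[j] + np.sqrt(1 - rho**2) * rng.standard_normal()
        # diminishing-step update of the running mean squared jump
        msj[i] += ((x[i] - old) ** 2 - msj[i]) / (n + 1)
        # adapt: select the coordinate with larger jumps more often,
        # but never with probability outside [eps, 1 - eps]
        alpha = float(np.clip(msj[0] / (msj[0] + msj[1]), eps, 1 - eps))
        samples[n] = x
    return samples, alpha
```

The clipping step is what distinguishes this sketch from the kind of naive adaptation the abstract warns about: without a bound keeping every selection probability positive, even a simple-seeming scheme can fail to converge to the target.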